
Windows support #4045

Closed
wants to merge 7 commits into from

Conversation

@mantaionut commented May 31, 2024

Implemented Windows support based on #2738, built with MSVC, so that torch.compile can work on GPU on Windows (pytorch/pytorch#122094).
The build was done by building LLVM directly, following the documentation.
Tested by running the unit tests. The failures also happen on my Linux machine, most of them due to insufficient GPU memory.

=== 32 failed, 9811 passed, 1530 skipped, 91 warnings in 1192.69s (0:19:52) ===

Running the unit test from https://triton-lang.org/main/getting-started/tutorials/03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py I got:

[Screenshot 2024-06-07 165223: benchmark output from the tutorial]

  • PR Description is written in clear, idiomatic English and follows the
    rules for a good PR description.

    (The LLM of your choice can help copyedit your PR description. You can even
    give it your whole patch to analyze.)

  • Pre-commit checks pass.

    pre-commit install
    pre-commit run --all-files
  • Tests have been added and/or updated.

    • For changes to the backend: /test/ (for lit), /unittest/ (for
      gtest), or occasionally end-to-end tests like in
      /python/test/unit/language/test_core.py.
    • For changes to the frontend: /python/test/
  • Documentation

    • The code contains comments where appropriate, written in clear,
      idiomatic English. Again, an LLM can help.
    • If appropriate, the Triton documentation has been updated.

@ptillet (Collaborator) commented May 31, 2024

Hello! Thank you for the PR but we don't have the bandwidth to commit to supporting Windows at this time. Would you mind maintaining a fork?

wkpark added 5 commits June 7, 2024 12:42
 * based on triton-lang#2465
 * manually applied, rebased, fix lint errors
 * use set_target_properties(), cleanup for windows
 * remove '/A' platform option to use windows ninja
 * remove unknown option '/m'
 * use sysconfig.get_config_var() to get the path of python*.lib
 * clang fix for windows
 * remove '-fPIC' for windows clang
 * fix download_and_copy() to support windows
 * add "exe" extension for windows
 * use "pyd" extension for windows to make importlib work
 * rework for latest triton (2024/01/14)

Original-author-by: Andrei Gheorghe <andrei@dharmaventures.co>
Signed-off-by: Won-Kyu Park <wkpark@gmail.com>
 * based on Windows support PR triton-lang#2456 by @andreigh
 * WIN32 fix using LoadLibrary
 * win32 fix _path_to_binary()
 * add library_dir, include_dir for win32
@mantaionut marked this pull request as ready for review June 7, 2024 14:50
@ptillet (Collaborator) commented Jun 7, 2024

(closing the PR following the above comment.)

@parlance-zz

Very disappointing to see this closed without merging the changes

@FurkanGozukara

It is shameless for this to be getting closed. Shame on you, OpenAI. It is your duty to support Windows, and you are closing this one!

Aren't you the ones getting billions from Microsoft while not supporting Microsoft's biggest product?

Triton is now the only remaining library that doesn't support Windows!

@FurkanGozukara

Are there any wheels that can be installed on Windows for Python 3.10? Please reply, thank you so much.

@FurkanGozukara

@mantaionut @wkpark do you have instructions anywhere on how to compile your fork? Or precompiled wheels anywhere, e.g. for Python 3.10?

I would appreciate it very much.

@ptillet (Collaborator) commented Jul 26, 2024

There is a big difference between having a commit that compiles/runs tests with MSVC and committing to actually supporting Windows. Merging this PR wouldn't have addressed the root cause behind our decision: the core Triton team doesn't have the bandwidth to fix future Windows issues. This means that we can't have Windows CI -- and that Triton would keep breaking on the main branch -- which wouldn't actually do a service to the community. I think that Windows users will be best served by teaming up to maintain a fork that is guaranteed to work on every commit.

I should have said that more clearly when I closed the PR. My intention was not to dismiss the work that was done by the author of this PR.

@FurkanGozukara

@ptillet currently a major open-source app, CogVLM 2, is unusable because OpenAI doesn't try to help people. OpenAI has billions of dollars. My words are for OpenAI. They are also getting billions from Microsoft. I find this situation unacceptable.

xFormers, DeepSpeed, BitsAndBytes and all the others are starting to give full support to Windows; only Triton isn't.

@Systemcluster

@ptillet I think there is value in supporting Windows on a "no-effort" basis and letting the community do the rest. You (the Triton team, or external reviewers) only have to sporadically review PRs when something needs to be changed. Make the Windows CI and wheels optional, and in case a release doesn't compile on Windows, downstream users or libraries only need to pin a lower version.

There is a level between "providing support" and "denying contributions" that I think would satisfy both sides.

@FurkanGozukara

@Systemcluster well said

@stellaraccident commented Aug 8, 2024

> There is a big difference between having a commit that compiles/runs tests with MSVC and committing to actually supporting Windows. Merging this PR wouldn't have addressed the root cause behind our decision: the core Triton team doesn't have the bandwidth to fix future Windows issues. This means that we can't have Windows CI -- and that Triton would keep breaking on the main branch -- which wouldn't actually do a service to the community. I think that Windows users will be best served by teaming up to maintain a fork that is guaranteed to work on every commit.
>
> I should have said that more clearly when I closed the PR. My intention was not to dismiss the work that was done by the author of this PR.

One other potential middle ground, which we use on adjacent projects, is to maintain a post-submit Windows bot (i.e. just for visibility; it doesn't block anything). We do that for a variety of less-than-fully-supported platforms, and it lets the community fast-follow and plan sane work for making releases. My experience over time is that eventually (which can take years), such platforms can sometimes be promoted to more of a default-on case and maintained without much, or any, effort.

Native Windows users/communities are used to this kind of dichotomy. It helps them immensely to be able to land basic patches in the main project and have some health signal.

(I don't have a horse in this race -- just sharing an approach that has worked for me on other projects when navigating the issue of Windows and misc config support)

@iperov commented Aug 19, 2024

Forget Triton, forget PyTorch.
JAX is the future.

@umarbutler commented Sep 25, 2024

For anyone else wondering whether it's worth your time to install Triton under WSL2, given its overhead, the answer is yes. It managed to shave 2 days off an ongoing training run, dropping its ETA from 9.5 days on native Windows 11 to 7.5 days.
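For what it's worth, those reported ETAs work out to roughly a 21% reduction. This is just arithmetic on the figures above, not a new measurement:

```python
# Quick arithmetic on the reported training ETAs (figures from the comment above).
native_days = 9.5  # reported ETA on native Windows 11
wsl2_days = 7.5    # reported ETA under WSL2 with Triton available

days_saved = native_days - wsl2_days
reduction = (1 - wsl2_days / native_days) * 100
speedup = native_days / wsl2_days

print(f"saved {days_saved} days ({reduction:.0f}% shorter, {speedup:.2f}x faster)")
# → saved 2.0 days (21% shorter, 1.27x faster)
```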

Just make sure you store your data and scripts inside your WSL2 instance: microsoft/WSL#4197

I've spent far too many hours trying to build Triton 3 myself, including using the repos of @wkpark, @mantaionut and @eaplatanios, but no dice.

@wkpark used to have wheels available, however, they are all expired (unavailable from GitHub and there is no mirror on the Wayback Machine), and they were for Python 3.10 and 3.11, not 3.12.

@ACGNnsj has Python 3.11 and 3.12 wheels available here, however, I have not tested them myself and so cannot vouch for them either way.

Based on the most recent comments made by the maintainers of Triton on the question of its compatibility with Windows, it does not appear that there is any real motivation to invest any degree of effort, however small, into supporting Windows.

Seeing as @xuhancn is hard at work on getting PyTorch Inductor to work on CPU on Windows (pytorch/pytorch#124245), and it is now possible to have DeepSpeed, bitsandbytes, flash attention and xformers all installed on Windows, I am hopeful that we will get some sort of alternative to Triton, at least for compiling PyTorch models (which is my primary use for it), that is just as platform-agnostic as the rest of PyTorch.

See also pytorch/pytorch#122094

@FurkanGozukara

It is just a total shame for OpenAI, taking tens of billions from Microsoft, not to support Windows, while all the others fully support Windows.

I wish researchers would completely abandon Triton in their research and apps.

@FurkanGozukara

Thank you, OpenAI, for taking tens of billions from Microsoft.


@synystersocks

This is fairly shocking; Windows is the most used OS in the world for enterprise, commercial, and consumer use. @openai, if your excuse is that you do not have the time, bandwidth, processing power, energy, etc., are you taking the piss? Let's be honest, Cough - Monopoly - Cough is a great game... Your company is swiftly losing trust; lest you become another Intel, consider supporting the communities, not sabotaging them.

@james-banks

> This is fairly shocking; Windows is the most used OS in the world for enterprise, commercial, and consumer use. @openai, if your excuse is that you do not have the time, bandwidth, processing power, energy, etc., are you taking the piss? Let's be honest, Cough - Monopoly - Cough is a great game... Your company is swiftly losing trust; lest you become another Intel, consider supporting the communities, not sabotaging them.

This level of anger and mud slinging isn't helpful.

I'm not sure what you mean by the terms "enterprise, commercial" above, but nobody is serving LLMs, VLMs and other inference via Windows servers. Nor are they training cutting-edge models on it.

I fully agree with the above comments about finding a middle ground for support, not just denying PRs. But you can't expect a full feature-parity day-0 release for Windows with every update when it's not the target OS for most work using Triton.

@Vigilence

Would appreciate Windows support.

@CrazyMonkeyCM2

Hear me out... I think Windows is gonna be really popular soon, so maybe supporting it would be good?

@DragonQuix

Would appreciate Windows support, too.

@umarbutler

> Nor are they training cutting-edge models on it.

@james-banks I’d dispute this. Windows might not be the most popular but it is used. I regularly train large-scale models on my Windows PC with a 4090, though sometimes via WSL2 to use Triton.

@Nojahhh commented Oct 12, 2024

“Not enough bandwidth” is not an excuse. Seeing that OpenAI depends on Microsoft and is not willing to find a way to support its largest contributing partner's main product, which also happens to be the most widely adopted consumer OS in the world, is not only shocking but also somewhat disgraceful.

The job is already halfway done by the community. Merge this PR and allocate more bandwidth to your Triton research team.

This shouldn’t even be considered a request but an obligation.

@mr-lab commented Oct 12, 2024

Shame on you. Someone put the effort into this, and you simply did them dirty. Shame.
And then you'll come leeching off open-source projects later...

@woct0rdho

Just want to share that I've successfully installed triton on Windows and called torch.compile:
jakaline-dev/Triton_win#2
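For anyone trying that route, a minimal smoke test along those lines might look like the sketch below. This is an assumption-laden illustration, not part of the linked instructions: it assumes PyTorch 2.x is installed, and on Windows it additionally assumes Inductor can find a working Triton (e.g. a community wheel). It degrades gracefully when the toolchain is unavailable.

```python
# Hypothetical smoke test: compile a tiny function with torch.compile and
# check that the compiled result matches eager execution.
try:
    import torch

    def f(x):
        return torch.sin(x) + torch.cos(x)

    compiled_f = torch.compile(f)  # routed through the Inductor backend
    x = torch.randn(8)
    torch.testing.assert_close(compiled_f(x), f(x))
    status = "compiled output matches eager"
except Exception as exc:
    # torch missing, torch too old for torch.compile, or the backend
    # toolchain (Triton / C++ compiler) is unavailable on this machine
    status = f"torch.compile unavailable here ({type(exc).__name__})"

print(status)
```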

@Gyro0o commented Oct 12, 2024

As a representative of the GPU-poor,
I demand support for Windows.
Thanks.

@FurkanGozukara

All of these libraries fully support Windows, and you have a far bigger budget than all of them. OpenAI is valued at over 100 billion USD.

Supporting Windows with precompiled wheels is not a hard task. You are supposed to be OpenAI: Open.

DeepSpeed, Accelerate, TensorRT, ONNX, BitsAndBytes, xFormers

@mr-september

> This level of anger and mud slinging isn't helpful.

What is? Civility and even literal free dev work have gotten us nowhere.

@HarshDurgude

Please make Triton support Windows.

@meanin2 commented Oct 13, 2024

triton windows!

@shivshankar11

Desperately waiting for Windows support.

@seoeaa commented Oct 13, 2024

It is very frustrating that the PR adding Windows support for Triton was closed without merging. Windows is the most widely used operating system in the world for corporate, commercial, and consumer applications. Are the excuses from OpenAI about lack of time, bandwidth, compute power, and energy really justifiable? Let's be honest, "Monopoly" is a great game... Your company is rapidly losing trust; rather than becoming another Intel, start supporting communities, not sabotaging them.

I understand that this level of anger and dirty attacks is not helpful. But I'm not sure what you mean by the terms "enterprise, commercial" - no one is serving LLMs, VLMs, and other inference systems through Windows servers. And they're not even training advanced models on it.

Nevertheless, I fully agree with the previous comments about the need to find a middle ground for support - not just rejecting merge requests. But one can't expect full feature parity from day one for Windows in every Triton update, when it's not the target OS for most Triton-using workloads.

Windows users are persistently requesting support. Some have even done a lot of work to implement this support themselves. It would be reasonable to consider their contribution and find ways to integrate it, even if it's done on a "best effort" basis by the core Triton team.

OpenAI has significant resources and collaborates with Microsoft. Rejecting community work and not trying to find a reasonable compromise on this issue seems unjustified. Windows support could open Triton up to new users and opportunities.

I urge OpenAI to reconsider their position and find a path to constructive collaboration with the community on this issue. This will benefit all stakeholders.

@triton-lang locked the conversation as too heated and limited it to collaborators Oct 14, 2024
@ThomasRaoux (Collaborator)

I'll try to summarize what has been said before on multiple threads.

The OSS Triton work is done by maintainers in our free time; it's not our full-time job, and we do it because we want to help the industry and we believe in OSS.

If someone is interested, we have offered to host a repo in triton-lang that would add Windows support. It can be maintained downstream and have its own build bot, like we do for the triton-cpu backend. If there are reasonable patches that then help generalize things, we are usually open to accepting those. I assume with some work it can come down to a minimal amount of changes.

If someone is interested in taking this on, feel free to contact me on Discord or Slack (same user name as my GitHub).
